[AArch64] Remove redundant FMOV for zero-extended i32/i16 loads to f64 #146920


Open · Amichaxx wants to merge 2 commits into main from aarch64-load-opt

Conversation

@Amichaxx commented on Jul 3, 2025:

Previously, separate load, zext, and FMOV instructions were emitted for this pattern. This patch adds a new TableGen pattern that avoids the unnecessary FMOV. A test is included in test/CodeGen/AArch64/load_u64_from_u32.ll.
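For context, a minimal sketch of the motivating case (the function name is illustrative, and the "before" output is approximate; register numbers may differ):

define double @load_u64_from_u32(ptr %n) {
entry:
  %0 = load i32, ptr %n, align 4      ; 32-bit load
  %conv = zext i32 %0 to i64          ; zero-extend to 64 bits
  %1 = bitcast i64 %conv to double    ; reinterpret the bits as f64
  ret double %1
}

; Before:  ldr w8, [x0]     ; integer load (the zext is implicit in the w-register write)
;          fmov d0, x8      ; extra GPR-to-FPR move
; After:   ldr s0, [x0]     ; FP load; the remaining bits of d0/v0 are zeroed by hardware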

github-actions bot commented on Jul 3, 2025:

Thank you for submitting a Pull Request (PR) to the LLVM Project!

This PR will be automatically labeled and the relevant teams will be notified.

If you wish to, you can add reviewers by using the "Reviewers" section on this page.

If this is not working for you, it is probably because you do not have write permissions for the repository; in that case, you can instead tag reviewers by name in a comment using @ followed by their GitHub username.

If you have received no comments on your PR for a week, you can request a review by adding a comment saying "Ping". The common courtesy ping rate is once a week. Please remember that you are asking for valuable time from other developers.

If you have further questions, they may be answered by the LLVM GitHub User Guide.

You can also ask questions in a comment on this PR, on the LLVM Discord or on the forums.

llvmbot (Member) commented on Jul 3, 2025:

@llvm/pr-subscribers-backend-aarch64

Author: Amina Chabane (Amichaxx)



Full diff: https://github.com/llvm/llvm-project/pull/146920.diff

2 Files Affected:

  • (modified) llvm/lib/Target/AArch64/AArch64InstrInfo.td (+6-1)
  • (added) llvm/test/CodeGen/AArch64/load_u64_from_u32.ll (+14)
diff --git a/llvm/lib/Target/AArch64/AArch64InstrInfo.td b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
index efe6cc1aa8aec..2b75e38232384 100644
--- a/llvm/lib/Target/AArch64/AArch64InstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64InstrInfo.td
@@ -3913,6 +3913,10 @@ defm LDRSW  : LoadUI<0b10, 0, 0b10, GPR64, uimm12s4, "ldrsw",
 def : Pat<(i64 (zextloadi32 (am_indexed32 GPR64sp:$Rn, uimm12s4:$offset))),
       (SUBREG_TO_REG (i64 0), (LDRWui GPR64sp:$Rn, uimm12s4:$offset), sub_32)>;
 
+// load zero-extended word, bitcast to double
+def : Pat <(f64 (bitconvert (i64 (zextloadi32 (am_indexed32 GPR64sp:$Rn, uimm12s4:$offset))))),
+           (INSERT_SUBREG  (f64 (IMPLICIT_DEF)), (LDRSui GPR64sp:$Rn, uimm12s4:$offset), ssub)>;
+    
 // Pre-fetch.
 def PRFMui : PrefetchUI<0b11, 0, 0b10, "prfm",
                         [(AArch64Prefetch timm:$Rt,
@@ -9414,6 +9418,7 @@ def : Pat<(v4i32 (mulhu V128:$Rn, V128:$Rm)),
                              (EXTRACT_SUBREG V128:$Rm, dsub)),
            (UMULLv4i32_v2i64 V128:$Rn, V128:$Rm))>;
 
+
 // Conversions within AdvSIMD types in the same register size are free.
 // But because we need a consistent lane ordering, in big endian many
 // conversions require one or more REV instructions.
@@ -10986,4 +10991,4 @@ defm FMMLA : SIMDThreeSameVectorFP8MatrixMul<"fmmla">;
 include "AArch64InstrAtomics.td"
 include "AArch64SVEInstrInfo.td"
 include "AArch64SMEInstrInfo.td"
-include "AArch64InstrGISel.td"
+include "AArch64InstrGISel.td"
\ No newline at end of file
diff --git a/llvm/test/CodeGen/AArch64/load_u64_from_u32.ll b/llvm/test/CodeGen/AArch64/load_u64_from_u32.ll
new file mode 100644
index 0000000000000..ad30981012112
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/load_u64_from_u32.ll
@@ -0,0 +1,14 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=aarch64-linux-gnu -o - %s | FileCheck %s
+
+define double @_Z9load_u64_from_u32_testPj(ptr %n) {
+; CHECK-LABEL: _Z9load_u64_from_u32_testPj:
+; CHECK:       // %bb.0: // %entry
+; CHECK-NEXT:    ldr s0, [x0]
+; CHECK-NEXT:    ret
+entry:
+  %0 = load i32, ptr %n, align 4
+  %conv = zext i32 %0 to i64
+  %1 = bitcast i64 %conv to double
+  ret double %1
+}

davemgreen (Collaborator) left a comment:

Hi - looks like a good patch. Loads/stores can have a number of addressing modes and types that it might be worth trying to fill in too.
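A rough sketch of what such additional patterns might look like, mirroring the existing zextloadi32 patterns for the register-offset and unscaled-immediate addressing modes (untested, and not part of the PR as posted):

// Register-offset form, e.g. ldr s0, [x0, w1, uxtw #2]
def : Pat<(f64 (bitconvert (i64 (zextloadi32 (ro_Windexed32 GPR64sp:$Rn, GPR32:$Rm, ro_Wextend32:$extend))))),
          (INSERT_SUBREG (f64 (IMPLICIT_DEF)),
                         (LDRSroW GPR64sp:$Rn, GPR32:$Rm, ro_Wextend32:$extend), ssub)>;
// Unscaled-immediate form, e.g. ldur s0, [x0, #-4]
def : Pat<(f64 (bitconvert (i64 (zextloadi32 (am_unscaled32 GPR64sp:$Rn, simm9:$offset))))),
          (INSERT_SUBREG (f64 (IMPLICIT_DEF)),
                         (LDURSi GPR64sp:$Rn, simm9:$offset), ssub)>;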

; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
; RUN: llc -mtriple=aarch64-linux-gnu -o - %s | FileCheck %s

define double @_Z9load_u64_from_u32_testPj(ptr %n) {
Collaborator commented on the quoted test lines above:

Can you add all of these as test cases: https://godbolt.org/z/4cPhr6f7e
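The Godbolt listing is not reproduced in this thread; based on the later commit messages (patterns for i8/i16/i32 zero-extended loads bitcast to both f64 and f32), the requested cases presumably resemble the following (function names are illustrative):

define double @load_f64_from_u16(ptr %n) {
entry:
  %v = load i16, ptr %n, align 2
  %e = zext i16 %v to i64
  %b = bitcast i64 %e to double
  ret double %b
}

define float @load_f32_from_u8(ptr %n) {
entry:
  %v = load i8, ptr %n, align 1
  %e = zext i8 %v to i32
  %b = bitcast i32 %e to float
  ret float %b
}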

Amichaxx (Author) replied:

Yes, will get to adding these today. Thanks.

Collaborator replied:

They can probably be part of the same test file (and if you wanted to add TableGen patterns for all the types, that would help keep them consistent). Otherwise this is looking good to me.
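One way to keep the per-type patterns consistent, as suggested, is a small multiclass; a rough, untested sketch under assumed parameter names (the PR itself may be structured differently):

// Illustrative multiclass: one pattern per (FP type, int type, load, addressing mode) tuple.
multiclass LoadZextBitcastPat<ValueType FpVT, ValueType IntVT,
                              SDPatternOperator loadop, ComplexPattern addr,
                              Operand offop, Instruction LDR, SubRegIndex sub> {
  def : Pat<(FpVT (bitconvert (IntVT (loadop (addr GPR64sp:$Rn, offop:$offset))))),
            (INSERT_SUBREG (FpVT (IMPLICIT_DEF)),
                           (LDR GPR64sp:$Rn, offop:$offset), sub)>;
}

// e.g. i32 -> f64 (as in this patch) and i16 -> f64:
defm : LoadZextBitcastPat<f64, i64, zextloadi32, am_indexed32, uimm12s4, LDRSui, ssub>;
defm : LoadZextBitcastPat<f64, i64, zextloadi16, am_indexed16, uimm12s2, LDRHui, hsub>;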

Collaborator replied:

Do you mind adding tests for each of the cases in the godbolt link? (They can be in the same file.) We might as well add patterns for all of them too, so that they all work the same.

Amichaxx (Author) replied:

I have all the tests in the same file, as well as all matching patterns. I am waiting for internal approval before I can push. Thanks.

Amichaxx force-pushed the aarch64-load-opt branch 4 times, most recently from 6835ce4 to fbf8c85, on July 4, 2025 at 11:04.
Previously, a load from i32 or i16, followed by zero-extension to i64 and a bitcast to f64, would emit a separate FMOV instruction.

This patch introduces new corresponding TableGen patterns to avoid the unnecessary FMOV.

Tests added:
  - load_u64_from_u32.ll
  - load_u64_from_u16.ll
Amichaxx force-pushed the aarch64-load-opt branch from fbf8c85 to b5757cb on July 4, 2025 at 11:08.
Amichaxx changed the title from "[AArch64] Remove redundant fmov instruction in i32 load, zero-extension to i64 and bitcast to f64" to "[AArch64] Remove redundant FMOV for zero-extended i32/i16 loads to f64" on Jul 4, 2025.
… where an integer is loaded from memory, zero-extended, and bitcast to a floating-point type. Patterns are added for i8/i16/i32 to f64 and f32. A single test file (load-zext-bitcast.ll) is included for all cases, which combines the previously created test files.
Amichaxx requested a review from davemgreen on July 8, 2025 at 08:14.
Amichaxx (Author) commented on Jul 8, 2025:

Sorry, didn't mean to open a review request. I was trying to add another reviewer.

CarolineConcatto self-requested a review on July 14, 2025 at 10:39.